First, we load the packages we need; `pacman::p_load()` installs any that are missing before loading them.

library(pacman)
pacman::p_load(leaflet, tidyverse, htmltools, terra, sf, scales, glue, readxl, stringi)

Cleaning Datasets

Now, we import the datasets we are going to use. The deforestation data was collected by Global Forest Watch, and the specific data for Colombia can be found here. The geographic files for the municipalities of Colombia are shared by Laboratorio Urbano - Bogotá and can be accessed here. Although the files come in different formats, we decided to use the Shapefile format for this exercise.

deforestation <- read_xlsx("COL.xlsx", 
                           sheet = "Subnational 2 tree cover loss") %>%
  distinct(subnational2, .keep_all = TRUE) %>%
  mutate(subnational2 = toupper(subnational2),
         subnational1 = toupper(subnational1),
         pct_gain = `gain_2000-2020_ha`/area_ha) %>%
  rename(nombre_mpi = subnational2,
         nombre_dpt = subnational1) %>%
  arrange(nombre_mpi)
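To illustrate what the deduplication step does (on a made-up toy tibble, not the GFW data), `distinct(subnational2, .keep_all = TRUE)` keeps only the first row for each municipality name while retaining all other columns:

```r
library(dplyr)

# Hypothetical toy data with a duplicated municipality name
toy <- tibble(
  subnational2 = c("Bogotá", "Bogotá", "Cali"),
  area_ha      = c(100, 150, 200)
)

# Keep the first row per municipality, all columns included
deduped <- toy %>% distinct(subnational2, .keep_all = TRUE)
deduped  # two rows: the first "Bogotá" row and the "Cali" row
```

Note that only the first occurrence survives, so any later duplicates are silently dropped.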

map <- sf::st_read("shapes/shapes.shp", 
                   stringsAsFactors = FALSE) %>%
  arrange(nombre_mpi)
Reading layer `shapes' from data source 
  `C:\Users\cpedr\OneDrive - Hertie School\GitHub\09-leaflet-satam-jimenez-hossain\shapes\shapes.shp' 
  using driver `ESRI Shapefile'
Simple feature collection with 1122 features and 9 fields
Geometry type: POLYGON
Dimension:     XY
Bounding box:  xmin: -81.73899 ymin: -4.2473 xmax: -66.874 ymax: 13.39744
Geodetic CRS:  WGS 84

Since we are working with data from Colombia, some of the municipality names in the “deforestation” dataset include accents. Thus, we use the stri_trans_general() function from the stringi package to remove them. However, since Spanish also uses the Ñ and the Ü, we do not want to get rid of those characters. Therefore, we need to create a function:
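To see why plain transliteration is not enough, note that the Latin-ASCII transform strips every diacritic, including the tilde on Ñ, even though Ñ is a distinct letter in Spanish:

```r
library(stringi)

# Transliteration removes all diacritics, including the tilde on Ñ
stri_trans_general("NARIÑO", "Latin-ASCII")
#> [1] "NARINO"
```

“NARIÑO” and “NARINO” would then no longer match the shapefile names, which is why the function below leaves Ñ and Ü untouched.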

# Function to remove accents
remove_accents <- function(input_string) {
  chars <- strsplit(input_string, "")[[1]]
  cleaned <- sapply(chars, function(x) {
    # Keep Ñ and Ü as-is; transliterate everything else to plain ASCII
    if (x %in% c("Ñ", "Ü")) x else stringi::stri_trans_general(x, "Latin-ASCII")
  })
  paste(cleaned, collapse = "")
}

# Apply the function to the vector
deforestation$nombre_mpi <- sapply(deforestation$nombre_mpi, remove_accents)
deforestation$nombre_dpt <- sapply(deforestation$nombre_dpt, remove_accents)

Merging Datasets

After cleaning the data, we need to merge the different files. There are 91 municipalities in the map with no matching data in the deforestation dataset.

#Merging data
deforestation_map <- full_join(map, deforestation, by = c("nombre_mpi", "nombre_dpt"))

#No matches
not_m <- map %>%
  anti_join(deforestation, by = c("nombre_mpi", "nombre_dpt")) %>%
  arrange(nombre_mpi)
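A small sketch of what the two joins do, using toy tibbles rather than the real data: full_join() keeps every municipality from both sides, filling NA where there is no match, while anti_join() returns only the rows of the first table that have no match in the second:

```r
library(dplyr)

# Hypothetical toy tables: three shapes, deforestation data for only two
shapes <- tibble(nombre_mpi = c("MEDELLIN", "CALI", "PASTO"))
defor  <- tibble(nombre_mpi = c("MEDELLIN", "CALI"),
                 tc_loss_ha = c(10, 20))

# All three municipalities survive; PASTO gets NA for tc_loss_ha
merged <- full_join(shapes, defor, by = "nombre_mpi")

# Only the unmatched municipality (PASTO) is returned
no_match <- anti_join(shapes, defor, by = "nombre_mpi")
```

In the real merge these NA rows are exactly the 91 municipalities without deforestation data.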

Getting creative with the map

Now that we have the data needed for our map, we can create a color palette and some hover labels.

#The palette of colors is based on a scale of greens that changes based on the gained ha of trees between 2000 and 2020 by municipality
palette <- colorNumeric(palette = "Greens", domain = deforestation_map$`gain_2000-2020_ha`)

#We create some tags by municipality using the glue and the htmltools packages
def_popup <- glue("<strong>{deforestation_map$nombre_mpi}</strong><br />
                    <strong>Department: {deforestation_map$nombre_dpt}</strong><br />
                    Total area (ha): {deforestation_map$area_ha}<br />
                    Gained ha of trees (2000-2020): {scales::comma(deforestation_map$`gain_2000-2020_ha`, accuracy = 1)}<br />
                    Tree cover loss 2022 (ha): {scales::comma(deforestation_map$tc_loss_ha_2022, accuracy = 1)}")  %>%   
  lapply(htmltools::HTML)
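It may help to see what colorNumeric() actually produces: it returns a function that maps any number in the domain to a hex color string, which leaflet then uses as the fill. A minimal sketch with an assumed toy domain of 0-100:

```r
library(leaflet)

# A palette function over a toy domain of 0-100
pal <- colorNumeric(palette = "Greens", domain = c(0, 100))

pal(0)    # a light green hex string at the bottom of the scale
pal(100)  # a dark green hex string at the top of the scale
```

In our map the domain is the `gain_2000-2020_ha` column, so municipalities with more gained tree cover are shaded darker.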

#Final map
leaflet() %>%
  addProviderTiles("OpenStreetMap") %>%
  addPolygons(
    data = deforestation_map,
    fillColor = ~palette(`gain_2000-2020_ha`),
    label = def_popup,
    fillOpacity = 1,
    color = "grey",
    weight = 1
  )